
    Dynamic Hilbert clustering based on convex set for web services aggregation

    In recent years, web services run by large corporations and application-specific data centers have been adopted by companies worldwide. Web services offer several benefits compared to other communication technologies, but they still suffer from congestion, bottlenecks, and significant delay due to the tremendous load caused by the large number of web service requests from end users. Clustering similar web services and then aggregating each cluster into one compressed message can reduce network traffic. This paper proposes dynamic Hilbert clustering, a new model for clustering web services based on convex set similarity. Mathematically, the suggested model computes the degree of similarity between simple object access protocol (SOAP) messages and then clusters them into groups of high similarity. Each cluster is then aggregated into a compact message that is finally encoded with fixed-length or Huffman coding. The experimental results show that the suggested model outperforms conventional clustering techniques in terms of compression ratio, reaching ratios of up to 15 with fixed-length coding and up to 20 with Huffman coding.
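The cluster-then-aggregate idea can be sketched in a few lines. This is a simplified illustration, not the paper's model: it uses `difflib` string similarity in place of the convex-set/Hilbert similarity, a greedy grouping in place of dynamic Hilbert clustering, and `zlib` as a stand-in for the fixed-length/Huffman coders; the SOAP messages and the 0.9 threshold are invented for the example.

```python
import zlib
from difflib import SequenceMatcher

def cluster_messages(messages, threshold=0.9):
    """Greedy similarity clustering: each message joins the first cluster
    whose representative is similar enough, otherwise it starts a new one."""
    clusters = []
    for msg in messages:
        for cluster in clusters:
            if SequenceMatcher(None, cluster[0], msg).ratio() >= threshold:
                cluster.append(msg)
                break
        else:
            clusters.append([msg])
    return clusters

def aggregate_and_compress(cluster):
    """Concatenate a cluster into one message, compress it, and
    return the achieved compression ratio (original / compressed)."""
    blob = "\n".join(cluster).encode()
    packed = zlib.compress(blob, level=9)
    return len(blob) / len(packed)

msgs = [
    "<soap:Envelope><Body><GetPrice item='A'/></Body></soap:Envelope>",
    "<soap:Envelope><Body><GetPrice item='B'/></Body></soap:Envelope>",
    "<soap:Envelope><Body><Login user='x' pass='y'/></Body></soap:Envelope>",
]
clusters = cluster_messages(msgs)          # the two GetPrice calls group together
ratios = [aggregate_and_compress(c) for c in clusters]
```

Grouping highly similar messages before compression is what makes the ratio improve: the coder exploits the redundancy shared across the cluster.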

    Health Electroencephalogram epileptic classification based on Hilbert probability similarity

    This paper proposes a new classification method based on Hilbert probability similarity (HPS) to detect epileptic seizures from electroencephalogram (EEG) signals. An HPS-based measure is exploited to quantify the similarity between signals, and the proposed system consists of HPS-based models that predict the state of a given EEG signal. Particle swarm optimization (PSO) is employed for feature selection and extraction. The dataset used in this study is Bonn University's publicly available EEG dataset. Several metrics are calculated to assess the performance of the suggested system, such as accuracy, precision, recall, and F1-score. The experimental results show that the suggested model is an effective tool for classifying EEG signals, with an accuracy of up to 100% for the two-class case.
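A similarity-based classifier of this kind can be sketched as follows. This is only an illustration of the general scheme, assuming a stand-in similarity: the Bhattacharyya coefficient between amplitude histograms replaces the paper's Hilbert probability similarity, and the two synthetic "signals" (low- vs. high-amplitude Gaussian noise) are invented, not EEG data.

```python
import numpy as np

def prob_similarity(x, y, bins=16):
    """Bhattacharyya coefficient between the amplitude histograms of two
    signals (1.0 = identical distributions) -- a stand-in for the paper's
    Hilbert probability similarity measure."""
    lo = min(x.min(), y.min())
    hi = max(x.max(), y.max())
    p, _ = np.histogram(x, bins=bins, range=(lo, hi))
    q, _ = np.histogram(y, bins=bins, range=(lo, hi))
    p = p / p.sum()
    q = q / q.sum()
    return float(np.sum(np.sqrt(p * q)))

def classify(signal, templates):
    """Assign the label of the most similar labelled template signal."""
    return max(templates, key=lambda lbl: prob_similarity(signal, templates[lbl]))

rng = np.random.default_rng(0)
templates = {
    "healthy": rng.normal(0.0, 1.0, 1000),  # low-amplitude background activity
    "seizure": rng.normal(0.0, 5.0, 1000),  # high-amplitude activity
}
unknown = rng.normal(0.0, 5.0, 1000)        # unseen high-amplitude signal
label = classify(unknown, templates)
```

The unknown signal's amplitude distribution matches the high-amplitude template far more closely, so the classifier labels it "seizure".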

    Survey analysis for optimization algorithms applied to electroencephalogram

    This paper presents a survey of optimization approaches that analyze and classify electroencephalogram (EEG) signals. The automatic analysis of EEG presents a significant challenge due to the high-dimensional data volume. Optimization algorithms seek to achieve better accuracy by selecting practical features and discarding unwanted ones. Forty-seven reputable research papers are reviewed in this work, emphasizing the developed and executed techniques, divided into seven groups based on the applied optimization algorithm (particle swarm optimization (PSO), ant colony optimization (ACO), artificial bee colony (ABC), grey wolf optimizer (GWO), Bat, Firefly, and other optimizer approaches). The main measures used to compare the papers are accuracy, precision, recall, and F1-score. Several datasets are utilized in the included papers, such as the Bonn University EEG dataset, CHB-MIT, an electrocardiography (ECG) dataset, and others. The results show that the PSO and GWO algorithms achieve the highest accuracy rate, around 99%, compared with other techniques.
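The four comparison measures used throughout the survey follow directly from the confusion-matrix counts. A minimal sketch (the toy label vectors are invented for illustration):

```python
def binary_metrics(y_true, y_pred):
    """Accuracy, precision, recall and F1 for binary labels,
    computed from true/false positive/negative counts."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    accuracy = (tp + tn) / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return accuracy, precision, recall, f1

acc, prec, rec, f1 = binary_metrics([1, 1, 0, 0, 1], [1, 0, 0, 1, 1])
```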

    Fractal feature selection model for enhancing high-dimensional biological problems

    The integration of biology, computer science, and statistics has given rise to the interdisciplinary field of bioinformatics, which aims to decode biological intricacies. It produces extensive and diverse features, presenting an enormous challenge in classifying bioinformatics problems. An intelligent bioinformatics classification system must therefore select the most relevant features to enhance machine learning performance. This paper proposes a feature selection model based on the fractal concept to improve the performance of intelligent systems in classifying high-dimensional biological problems. The proposed fractal feature selection (FFS) model divides features into blocks, measures the similarity between blocks using root mean square error (RMSE), and determines the importance of features based on low RMSE. The FFS model is tested and evaluated on ten high-dimensional bioinformatics datasets. The experimental results showed that the model significantly improves machine learning accuracy: the average accuracy rate was 79% with full features, while FFS delivered promising results with an accuracy rate of 94%.
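The block-wise RMSE scoring can be sketched as below. This is a rough reading of the abstract, not the paper's algorithm: the block size, the number of features kept, the use of per-sample block means, and the synthetic data are all illustrative assumptions.

```python
import numpy as np

def fractal_feature_selection(X, block_size=4, keep=8):
    """Sketch of the FFS idea: split the feature columns into blocks,
    score each block by its mean RMSE against every other block, and keep
    the features of the lowest-RMSE (most mutually similar) blocks."""
    n_features = X.shape[1]
    blocks = [list(range(i, min(i + block_size, n_features)))
              for i in range(0, n_features, block_size)]

    def block_mean(b):                      # per-sample mean over a block
        return X[:, b].mean(axis=1)

    scores = []
    for i, b in enumerate(blocks):
        rmses = [np.sqrt(np.mean((block_mean(b) - block_mean(o)) ** 2))
                 for j, o in enumerate(blocks) if j != i]
        scores.append(np.mean(rmses))       # low score = similar to the rest

    ranked = sorted(range(len(blocks)), key=lambda i: scores[i])
    selected = [f for i in ranked for f in blocks[i]][:keep]
    return sorted(selected)

rng = np.random.default_rng(1)
X = rng.normal(size=(50, 16))
X[:, 12:16] *= 10                           # one noisy, dissimilar block
chosen = fractal_feature_selection(X)       # the noisy block is excluded
```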

    Jaccard-based Random Distribution with Least and Most Significant Bit Hiding Methods for Highly Patients MRI Protected Privacy

    The main goal of this study is to improve patient care by making it easier for patient data and images to be transmitted between medical centers without problems. One of the biggest obstacles in telemedicine, however, is keeping patient information private and ensuring that data is safe. This is especially important because even small changes to patient information could have serious consequences, such as wrong evaluations and lower-quality care. This study develops a new model that uses a Jaccard-based distribution of the least significant bit (LSB) and the most significant bit (MSB) to address this complex problem. The goal of the model is to hide a large amount of patient information within an MRI cover image, and its careful design ensures a solid way to hide information securely. A more advanced method is also suggested, in which private text is embedded at random locations in the cover image, a scheme intended to strengthen security measures and keep private patient information secret. The peak signal-to-noise ratio (PSNR), the structural similarity index measure (SSIM), and the mean square error (MSE) all improved significantly when this method was tested in practice. With these convincing results, the study shows that the approach is more effective than traditional methods for keeping patient data safe, demonstrating that the presented model and method have the potential to greatly improve patient privacy and data accuracy in telemedicine systems, and thereby the general quality of health care.
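The core LSB embedding with randomized positions can be sketched as follows. This is a generic illustration, not the paper's method: plain pseudo-random position selection stands in for the Jaccard-based distribution, the MSB branch is omitted, and the 8x8 "cover image" and the secret text are invented for the example.

```python
import numpy as np

def embed_lsb(cover, payload_bits, positions):
    """Hide payload bits in the least significant bit of the cover
    pixels at the given (pseudo-randomly chosen) positions."""
    stego = cover.copy()
    flat = stego.ravel()                      # view into the copy
    for bit, pos in zip(payload_bits, positions):
        flat[pos] = (flat[pos] & 0xFE) | bit  # clear LSB, set payload bit
    return stego

def extract_lsb(stego, positions, n_bits):
    """Read the payload bits back from the same positions."""
    flat = stego.ravel()
    return [int(flat[pos] & 1) for pos in positions[:n_bits]]

rng = np.random.default_rng(42)
cover = rng.integers(0, 256, size=(8, 8), dtype=np.uint8)  # toy cover image
secret = "hi"
bits = [int(b) for byte in secret.encode() for b in f"{byte:08b}"]

# Random, non-repeating embedding positions (shared secret between sender
# and receiver; a stand-in for the Jaccard-based distribution).
positions = rng.choice(cover.size, size=len(bits), replace=False)

stego = embed_lsb(cover, bits, positions)
recovered_bits = extract_lsb(stego, positions, len(bits))
recovered = bytes(int("".join(map(str, recovered_bits[i:i + 8])), 2)
                  for i in range(0, len(recovered_bits), 8)).decode()
```

Because only least significant bits change, each modified pixel differs from the original by at most 1, which is why distortion metrics such as MSE and PSNR stay favorable.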